WHAT IMPACT COULD ARTIFICIAL INTELLIGENCE HAVE ON ELECTIONS?

The Experts/Researchers adopt a broad understanding of elections that encompasses more than the practical conduct of elections, taking in the election campaign and the post-election period, when the election results are published and discussed. In their work, the Experts/Researchers have focused on three main areas where Artificial Intelligence (AI) may have an impact on elections and democracy, and where they believe there is particular cause to be vigilant and prepared in order to avoid negative consequences for elections and democracy:

– The information and media landscape

– Covert election influence

– The election process and cybersecurity

The primary areas are inspired by the tripartite approach used in reports on AI-enabled influence on elections, which recognises that AI can be used to influence individuals, pollute the information landscape and affect the conduct and infrastructure of elections.

These primary areas are not entirely separate and should be viewed in context. For example, an influence operation may also include cyberattacks or be aimed at amplifying conflicts or changing how the information landscape functions.

A well-functioning information and media landscape with strong editor-controlled media ensures that voters receive quality-assured and credible information and that there is a critical spotlight on issues, parties and candidates, as well as a fact-based public debate that provides voters with an informed basis for casting their vote. A diverse media landscape contributes to broad inclusion and representation of different views.

The information and media landscape has already undergone major changes, driven both by technological developments that have moved content and readers from paper to the web and to new formats, and by social media becoming more important news platforms. This has positive aspects, as it gives people greater access to information and networks. However, it also has negative aspects, such as increasing fragmentation of public discourse and a growing concentration of power among a few large international platforms, technology companies and individuals, such as X owner Elon Musk, as well as other owners and influencers.

“Traditional” AI, in the form of algorithms, has long been a fundamental part of how social media works. This has contributed to changes in the information landscape by enabling individual users to receive search results, news, and content based on their own interests and the interests of similar users. Algorithm-driven logic can also result in greater exposure to misinformation and disinformation.
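To make this mechanism concrete, below is a minimal, hypothetical sketch of the kind of engagement-weighted ranking logic that underlies such recommendation systems. All names and numbers are illustrative, not any platform’s actual algorithm; the point is that content attracting high engagement, which misinformation often does, is scored up for users whose interests it matches.

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    topics: set[str]        # e.g. {"election", "economy"}
    engagement_rate: float  # historical clicks per impression, 0..1

def rank_feed(posts: list[Post], user_interests: set[str]) -> list[Post]:
    """Score posts by topical overlap with the user's interests,
    weighted by how much engagement similar content has received."""
    def score(post: Post) -> float:
        overlap = len(post.topics & user_interests)
        return overlap * (1.0 + post.engagement_rate)
    return sorted(posts, key=score, reverse=True)

# Illustration: a high-engagement rumour outranks a sober report
# for a user interested in the election.
feed = rank_feed(
    [Post("rumour", {"election"}, 0.9), Post("report", {"election"}, 0.2)],
    user_interests={"election"},
)
print([p.post_id for p in feed])  # ['rumour', 'report']
```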

Generative AI reinforces all these tendencies. Such tools contribute to new ways of producing, distributing, systematising and analysing information, and can lead to changes in where and how voters acquire information.

Generated content can appear in a variety of formats, such as text, audio, images and video, making it possible to produce realistic but still fake content more efficiently and with fewer resources. Much of the use will be unproblematic, although some may have the potential to confuse and mislead while remaining within the bounds of freedom of expression. Artificially generated misinformation can become so widespread that it undermines confidence in information in general, thereby contributing to the displacement of credible information. This type of content can, for example, be used to mislead or confuse voters about issues and the election process, undermine political opponents, enhance the emotional engagement of voters, or generate texts for social media and websites. Like other misinformation and disinformation, such content is primarily spread through social platforms.

The big breakthrough for generative AI arrived with chatbots, particularly ChatGPT. Since then, other companies have released their own chatbots. Although the most commonly used chatbots are the general-purpose bots offered by major companies, there are also more specialised variants trained on limited material to answer questions on a specific topic. Since chatbots provide quick and customised answers, many users choose these tools to find information instead of traditional search engines. This shift could be problematic if the language models generate incorrect information about elections, candidates and parties, a phenomenon known as AI hallucination. Language models can also have intentional or unintentional political biases that are not known to the user, stemming from the data the model has been trained on. The use of chatbots as a source of information means that information can become increasingly personalised: the answers provided will vary and will not necessarily be visible to the general public. Search engines have also begun to include AI-generated summaries in search results, such as Google’s AI Overview, giving users an AI-generated response instead of referring them to sources.

Chatbots, bot avatars and similar AI-driven services can make a positive contribution to the dissemination of relevant information, particularly through multilingual accessibility and the universal design of documents for users with visual and hearing impairments. However, such technology can present challenges for electoral authorities at both the local and national levels, as they are tasked with ensuring that voters are correctly informed.
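One way electoral authorities can reduce the hallucination risk described above is to constrain such services to vetted official text and have them refuse rather than improvise. The sketch below is a deliberately simplified, hypothetical illustration of that grounding principle; the FAQ entries are invented, and a real service would use proper retrieval over official documents.

```python
# Hypothetical, vetted question-answer pairs from an election authority.
OFFICIAL_FAQ = {
    "when is election day": "Election day is set by the election authority; "
                            "see the official election website for the date.",
    "who is eligible to vote": "Eligibility rules are published by the "
                               "election authority on its official website.",
}

def grounded_answer(question: str) -> str:
    key = question.lower().strip(" ?")
    # Only return text that exists verbatim in the vetted source.
    if key in OFFICIAL_FAQ:
        return OFFICIAL_FAQ[key]
    # Refusing is safer than letting a language model improvise facts.
    return "I can only answer from official election information."

print(grounded_answer("When is election day?"))
print(grounded_answer("Which party should I vote for?"))
```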

AI has already had an impact on the media landscape, both in terms of how editor-controlled media work and their role in society. The media have actively used AI in certain areas, such as for summaries and in their internal work on research, transcription, translation and text structure. At the same time, the increased prevalence of AI-generated content means that the media must be more vigilant to avoid being deceived by such content, and they must spend more resources verifying content to prevent the publication of inaccurate information. Several media outlets have also tried more experimental uses of AI to engage users in new ways.

The media must be given a framework that supports freedom of expression and public discourse, so that they can act as a counterweight to the potential negative effects these developments may have on the election process. The tools used to safeguard editorially controlled media in the face of disinformation, influence operations and the loss of younger audiences should reflect these changes and aim to strengthen such media's ability to compete and innovate as the technology develops.

The potential consequences of AI-based deepfakes, misinformation and chatbots are serious. They make it more difficult to distinguish between real and fake content, which undermines trust in the information landscape. These changes also mean that fewer people are exposed to the same content and information, resulting in a more fragmented and confusing information and media landscape. Overall, the changes discussed above show that the consequences have the potential to affect not only the election process, but the democratic system as a whole.

AI can create new opportunities for influencing and manipulating both individuals and groups in society. This applies not only to the political influence that is a natural part of an election campaign, but also to covert influence, which is the second main area the Experts/Researchers have studied.

Accessible and affordable AI tools can lower the bar for conducting influence operations. Not only can they amplify the capabilities of established threat actors; they can also bring new and additional threat actors into the field. These may include both foreign state and non-state actors, and may even open the door to attempts at covert election interference by actors operating alone or in collaboration with others. One example of foreign actors working together in this manner with domestic actors was the dramatic presidential election held in Romania in December 2024.

Covert election influence is not limited to individual events surrounding election day. As the European External Action Service (EEAS) has pointed out, a coordinated influence operation will often take place over a longer period, from long before the election to after it has taken place, and it will have different phases with different intensities of activity. The research report notes that there are different stages of an election process and that threat actors have different objectives for their influence at the different stages:

– Before voting begins: The focus of influence operations is on undermining the reputation of specific candidates or swaying voters’ views on particular issues.

– During the voting period: The focus is on disrupting and overloading the information space and causing confusion among voters about particular issues related to the campaign or election.

– After the election: The focus is on undermining confidence in the election results, for example by creating the impression of electoral fraud, which in turn can lead to a more long-term decline in confidence in the democratic processes.

The Experts/Researchers have emphasised three ways in which AI can be used that may pose particular threats in this context:

1. Fake users and websites: Threat actors can use AI to generate various forms of misleading content more effectively than before: reducing linguistic and, to some extent, cultural barriers; operating bots and botnets more efficiently; and creating and managing fake profiles and fake websites more quickly. There are a number of examples where threat actors have used AI tools to create fake websites that either mimic editor-controlled media with high credibility or pose as new online newspapers or as websites for (non-existent) research institutes or similar. These sites are used to disseminate content intended to exert influence by spreading narratives the threat actor believes will be beneficial, through channels designed to appear credible and authentic. This can make it harder to detect inauthentic behaviour on social media, as varying both language and expression has become less resource-intensive (see the sketch after this list). With AI, the creation of such fake users and websites, and the production of their content, can also be automated.

2. Increased personalisation: A threat actor may use AI as part of an influence operation to influence voter attitudes, opinions, and perceptions. This may include the spread of different types of AI-generated disinformation that is more or less tailored to different groups. By targeting content, it is also possible to tailor messages to the recipient based on, for instance, characteristics such as age, gender, place of residence and other personal aspects based on data the actor has access to. By utilising such opportunities, threat actors can target voters more precisely in order to convince them to vote differently than they otherwise would, or to abstain from voting. The same techniques can be used more broadly to influence the agenda and debate surrounding an election to highlight particular issues, such as those that may be particularly polarising or that can contribute to reducing trust in society.

3. Content overflow: A threat actor can also take advantage of the opportunities provided by AI to create an overflow of AI-generated content, which may include fake news, manipulated videos, manipulated voices and other forms of disinformation. If such disinformation floods various platforms and websites, this could result in a general distrust of the news and other available information. Even more serious is the risk that citizens will no longer trust that others in society are sufficiently informed. Trust in other citizens to make informed choices is crucial to a functioning democracy, but in an information landscape characterised by disinformation, such trust can deteriorate.
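To illustrate the detection problem flagged in point 1, here is a minimal, hypothetical sketch of the kind of simple near-duplicate heuristic that has traditionally helped flag copy-paste botnets. The data is invented; the point is that AI-paraphrased variants of the same narrative fall below a text-similarity threshold and evade such checks.

```python
from difflib import SequenceMatcher

def looks_coordinated(posts: list[str], threshold: float = 0.9) -> bool:
    """Naive heuristic: flag a set of posts if any two are near-duplicates.
    Copy-paste amplification is caught easily; AI-paraphrased variants
    of the same narrative slip below the similarity threshold."""
    for i in range(len(posts)):
        for j in range(i + 1, len(posts)):
            if SequenceMatcher(None, posts[i], posts[j]).ratio() >= threshold:
                return True
    return False

copy_paste = ["The election is rigged!", "The election is rigged!"]
paraphrased = ["The election is rigged!", "No one should trust this vote."]
print(looks_coordinated(copy_paste))   # True: classic botnet signature
print(looks_coordinated(paraphrased))  # False: AI variation evades the check
```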

Election security has received increasing attention, and there are a variety of ways in which AI can play a role in this work as well. Threat actors can use AI to carry out destructive activities more quickly, on a larger scale and with fewer resources.

For instance, AI can be used to compromise and manipulate electronic systems used in elections, or to create chaos through cyberattacks or cyber operations. This may hinder or disrupt the election process itself and create uncertainty about the election results. Such incidents can weaken confidence in the integrity of elections and the legitimacy of democratic processes, regardless of whether the attacks actually succeed in inflicting harm or making changes to systems.

One example is phishing campaigns, where actors may attempt to defraud or gain access to systems or information, typically by trying to get individuals, such as people associated with election authorities or politicians, to provide information or open links. Using AI tools such as language models and voice cloning, threat actors can carry out targeted cyber operations against individuals that appear more trustworthy. AI tools can also be used in large-scale campaigns aimed at disrupting voting, such as through distributed denial-of-service (DDoS) attacks, malware placement, automated phone calls or election-related threats.

AI also provides new tools that can be used for programming and reviewing code. There are clear opportunities to use such tools to facilitate and streamline the development of electronic solutions. Code reviews can be useful for uncovering vulnerabilities in electronic solutions, and the European External Action Service has pointed out that “[t]he explosive growth and availability of AI tools may even hold more benefits for defenders than attackers”. Yet there is also potential for abuse: threat actors are conducting more thorough reconnaissance to learn how to extract the information they are after, and new tools make it possible for actors with less technical expertise to penetrate systems that should be protected.
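As a defensive illustration of the code-review point above, the sketch below asks a language model to review a deliberately vulnerable snippet. It assumes the OpenAI Python SDK and an API key in the environment; the model name and prompt are illustrative choices rather than a recommendation, and any findings would still need verification by a human reviewer.

```python
# A minimal sketch of LLM-assisted defensive code review (assumptions:
# the OpenAI Python SDK is installed and OPENAI_API_KEY is set).
from openai import OpenAI

SNIPPET = '''
def lookup_voter(conn, name):
    # String-built SQL: a classic injection vulnerability.
    return conn.execute("SELECT * FROM voters WHERE name = '" + name + "'")
'''

client = OpenAI()
response = client.chat.completions.create(
    model="gpt-4o-mini",  # illustrative model choice
    messages=[
        {"role": "system",
         "content": "You are a security reviewer. List concrete "
                    "vulnerabilities in the code and suggest fixes."},
        {"role": "user", "content": SNIPPET},
    ],
)
print(response.choices[0].message.content)
```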

Looking ahead to future elections, a key part of the Experts/Researchers’ mandate has been to map the experiences of other countries that have conducted elections in recent years. This mapping covers what the countries did in advance of the election to prepare for the possibility that AI would be used in an unwanted fashion, whether there were incidents in connection with the election that involved the use of AI and how these were managed, and whether AI was believed to have had any impact on the conduct or outcome of the election.
